Resolution, Nonlinearity, Constraints, Confidence, Design, Minimax and Bayes

Authors

  • Philip B. Stark
  • Luis Tenorio
Abstract

Of those things that can be estimated well in an inverse problem, which are best to estimate? Backus-Gilbert resolution theory answers a version of this question for linear (or linearized) inverse problems in Hilbert spaces with additive zero-mean errors with known, finite covariance, and no constraints on the unknown other than the data. This paper extends Backus-Gilbert resolution: it defines the resolution and Bayes resolution of an estimator, intrinsic minimax and Bayes resolution, and intrinsic minimax and Bayes design resolution. Intrinsic resolution is the smallest value of a penalty among functionals that can be estimated with controlled (minimax or Bayes) risk. Intrinsic minimax resolution includes Backus-Gilbert resolution and subtractive optimally localized averages (SOLA) as special cases. Intrinsic design resolution is the smallest value of a penalty among functionals that can be estimated with controlled (minimax or Bayes) risk using observations with a controlled acquisition cost. Intrinsic resolution wraps the classical problem of choosing an optimal estimator of an abstract parameter inside the problem of choosing an optimal parameter to estimate. Intrinsic design resolution adds another layer: optimizing what to observe. The definitions apply to inverse problems with constraints, to nonlinear inverse problems, to nonlinear and biased estimators, to general measures of risk (not just the variance of unbiased estimators), and to abstract penalties not necessarily related to “spread.” For illustration, the resolution of “strict bounds” confidence intervals is defined.
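The abstract's central notion can be sketched schematically. The notation below (parameter set $\Theta$, class of functionals $\Lambda$, penalty $p$, risk threshold $\alpha$) is illustrative and not taken from the paper:

```latex
% Schematic definition of intrinsic minimax resolution:
% among functionals \lambda that can be estimated with
% minimax risk at most \alpha, take the smallest penalty.
\[
  \rho(\alpha) \;=\;
  \inf_{\lambda \in \Lambda}
  \Bigl\{\, p(\lambda) \;:\;
    \inf_{\hat\lambda}\,
    \sup_{\theta \in \Theta}
    \mathbb{E}_\theta\, L\bigl(\hat\lambda(Y),\, \lambda[\theta]\bigr)
    \le \alpha
  \,\Bigr\},
\]
```

where $Y$ denotes the data, $\hat\lambda$ ranges over estimators, and $L$ is the loss. The Bayes version replaces the supremum over $\theta$ with an average over a prior; the design version additionally optimizes over which observations to acquire, subject to a cost constraint.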


Similar articles

Generalizing resolution

Of those things that can be estimated well in an inverse problem, which is best to estimate? Backus–Gilbert resolution theory answers a version of this question for linear (or linearized) inverse problems in Hilbert spaces with additive zero-mean errors with known, finite covariance, and no constraints on the unknown other than the data. This paper generalizes resolution: it defines the resolut...


Minimax Estimator of a Lower Bounded Parameter of a Discrete Distribution under a Squared Log Error Loss Function

The problem of estimating the parameter θ, when it is restricted to a lower-bounded interval, in a class of discrete distributions including the Binomial, Negative Binomial, and discrete Weibull, is considered. We give necessary and sufficient conditions under which the Bayes estimator of θ with respect to a two-point boundary-supported prior is minimax under the squared log error loss function....
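For reference, the squared log error loss mentioned in this abstract is conventionally written as follows (a standard textbook definition, not quoted from the paper itself):

```latex
\[
  L(\theta, \delta) \;=\; \bigl(\ln \delta - \ln \theta\bigr)^2
  \;=\; \Bigl(\ln \frac{\delta}{\theta}\Bigr)^2,
  \qquad \theta, \delta > 0,
\]
```

a scale-invariant loss: it penalizes the relative error $\delta/\theta$ rather than the absolute difference, which is why it is natural for positive, scale-type parameters.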


Truncated Linear Minimax Estimator of a Power of the Scale Parameter in a Lower-Bounded Parameter Space

Minimax estimation problems with a restricted parameter space have attracted increasing interest within the last two decades. Some authors have derived minimax and admissible estimators of bounded parameters under squared error loss and scale-invariant squared error loss. In some truncated estimation problems, the most natural estimator to consider is the truncated version of a classic...


PAC-Bayes with Minimax for Confidence-Rated Transduction

We consider using an ensemble of binary classifiers for transductive prediction, when unlabeled test data are known in advance. We derive minimax optimal rules for confidence-rated prediction in this setting. By using PAC-Bayes analysis on these rules, we obtain data-dependent performance guarantees without distributional assumptions on the data. Our analysis techniques are readily extended to ...


Invariant Empirical Bayes Confidence Interval for Mean Vector of Normal Distribution and its Generalization for Exponential Family

Based on a given Bayesian model of a multivariate normal with known variance matrix, we find an empirical Bayes confidence interval for the components of the mean vector, which have a normal distribution. We construct this empirical Bayes confidence interval conditionally on an ancillary statistic. In both cases (i.e., the conditional and unconditional empirical Bayes confidence intervals), the empiri...



Journal:

Volume   Issue 

Pages  -

Publication date: 2007